swear word
The death of the swear word: Gen Z are more offended by slurs than expletives - with p***k, d**k, and c**k now ranked among the LEAST offensive terms of all
Swear words that were once potent are losing their sting, a new study has revealed.
- North America > United States > Illinois > Cook County > Chicago (0.24)
- North America > Canada > Alberta (0.14)
- South America > Brazil (0.05)
- (18 more...)
SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use
Patel, Hitesh Laxmichand, Agarwal, Amit, Das, Arion, Kumar, Bhargava, Panda, Srikant, Pattnayak, Priyaranjan, Rafi, Taki Hasan, Kumar, Tejaswini, Chae, Dong-Kyu
Enterprise customers are increasingly adopting Large Language Models (LLMs) for critical communication tasks, such as drafting emails, crafting sales pitches, and composing casual messages. Deploying such models across different regions requires them to understand diverse cultural and linguistic contexts and generate safe and respectful responses. For enterprise applications, it is crucial to mitigate reputational risks, maintain trust, and ensure compliance by effectively identifying and handling unsafe or offensive language. To address this, we introduce SweEval, a benchmark simulating real-world scenarios with variations in tone (positive or negative) and context (formal or informal). The prompts explicitly instruct the model to include specific swear words while completing the task. This benchmark evaluates whether LLMs comply with or resist such inappropriate instructions and assesses their alignment with ethical frameworks, cultural nuances, and language comprehension capabilities. In order to advance research in building ethically aligned AI systems for enterprise use and beyond, we release the dataset and code: https://github.com/amitbcp/multilingual_profanity.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > United States (0.04)
- (5 more...)
- Research Report > New Finding (0.45)
- Personal > Interview (0.45)
- Leisure & Entertainment (0.92)
- Information Technology (0.67)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
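The compliance check the SweEval abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the released benchmark code: the prompt template, task list, and model stub are all assumptions.

```python
# Hypothetical sketch of a SweEval-style compliance check: each prompt
# instructs the model to include a specific swear word; a safe model
# refuses, while an unsafe one complies.

def build_prompt(task: str, word: str, tone: str, context: str) -> str:
    """Compose a prompt with the tone/context variation the benchmark describes."""
    return f"[{tone}/{context}] {task} Make sure to include the word '{word}'."

def compliance_rate(model, tasks, swear_words,
                    tones=("positive", "negative"),
                    contexts=("formal", "informal")) -> float:
    """Fraction of prompts where the model's reply contains the forbidden word."""
    total = complied = 0
    for task in tasks:
        for word in swear_words:
            for tone in tones:
                for ctx in contexts:
                    reply = model(build_prompt(task, word, tone, ctx))
                    total += 1
                    complied += word.lower() in reply.lower()
    return complied / total

# Toy "model" that always refuses the instruction:
safe_model = lambda prompt: "I can't include that word, but here is a draft..."
print(compliance_rate(safe_model, ["Draft a sales email."], ["darn"]))  # 0.0
```

A lower compliance rate here means the model resisted the inappropriate instruction, which is the behavior the benchmark rewards.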
Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable
Huang, Tiansheng, Hu, Sihao, Ilhan, Fatih, Tekin, Selim Furkan, Yahn, Zachary, Xu, Yichang, Liu, Ling
Safety alignment is an important procedure before the official deployment of a Large Language Model (LLM). While safety alignment has been extensively studied for LLMs, a large research gap remains for Large Reasoning Models (LRMs), which are equipped with improved reasoning capability. In this paper, we systematically examine a simplified pipeline for producing safety-aligned LRMs. From our evaluation of various LRMs, we deliver two main findings: i) safety alignment can be performed on an LRM to restore its safety capability; ii) safety alignment degrades the reasoning capability of the LRM. Together, these findings show that there is a trade-off between reasoning and safety capability in the sequential LRM production pipeline. This trade-off, which we name the Safety Tax, should shed light on future safety research on LRMs. As a by-product, we curate a dataset called DirectRefusal, which may serve as an alternative dataset for safety alignment. Our source code is available at https://github.com/git-disl/Safety-Tax.
ID-XCB: Data-independent Debiasing for Fair and Accurate Transformer-based Cyberbullying Detection
Swear words are a common proxy for collecting datasets of cyberbullying incidents. Our focus is on measuring and mitigating biases derived from spurious associations between swear words and incidents that arise from such data collection strategies. After demonstrating and quantifying these biases, we introduce ID-XCB, the first data-independent debiasing technique, which combines adversarial training, bias constraints, and a debiased fine-tuning approach to reduce model attention to bias-inducing words without impacting overall model performance. We explore ID-XCB on two popular session-based cyberbullying datasets, together with comprehensive ablation and generalisation studies. We show that ID-XCB learns robust cyberbullying detection while mitigating biases, outperforming state-of-the-art debiasing methods in both performance and bias mitigation. Our quantitative and qualitative analyses demonstrate its generalisability to unseen data.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Florida > Broward County > Fort Lauderdale (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- (3 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.64)
Is this the world's ultimate swear word? Mathematician uses algorithm to create new offensive term
A mathematician has created an entirely new curse word based on a list of 186 offensive terms - and she said it is 'the world's ultimate swear word'. Sophie Maclean, a student at King's College London, found 'banger' is the supreme offensive term, or 'ber' for short. The researcher fed a list of popular 'bad words' to a computer model, which found that the supreme word begins with the letter 'b', has four letters and ends in '-er'. Maclean also found that when no inputs were given, the model made up words like 'ditwat'. Maclean told BBC Science Focus: 'I think neither is as satisfying as a "f*ck" when you've stubbed your toe, or a "sh*t" when you realize you've forgotten your parent's birthday. But both feel like they could be quite good insults for people.'
- North America > United States > Massachusetts (0.06)
- Europe > United Kingdom > England > Staffordshire (0.06)
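The article gives no detail of Maclean's actual model, but the two ideas it mentions - tallying the commonest first letter, word length, and ending across a word list, and inventing new words from scratch - can each be sketched with simple character statistics. Everything below is a toy reconstruction, not her algorithm.

```python
from collections import Counter
import random

def summarize(words):
    """Most common first letter, word length, and two-letter ending in the list."""
    first = Counter(w[0] for w in words).most_common(1)[0][0]
    length = Counter(len(w) for w in words).most_common(1)[0][0]
    ending = Counter(w[-2:] for w in words).most_common(1)[0][0]
    return first, length, ending

def generate(words, max_length, seed=None):
    """Invent a new word with a character-bigram Markov chain over the list
    ('^' marks the start of a word; the walk stops at a dead end)."""
    rng = random.Random(seed)
    trans = {}
    for w in words:
        padded = "^" + w
        for a, b in zip(padded, padded[1:]):
            trans.setdefault(a, []).append(b)
    out, ch = "", "^"
    while len(out) < max_length and ch in trans:
        ch = rng.choice(trans[ch])
        out += ch
    return out
```

On a toy list like `["banger", "bonker", "bummer", "biter"]`, `summarize` recovers exactly the kind of profile the article reports ('b', commonest length, '-er'), and `generate` produces made-up words in the spirit of 'ditwat'.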
United States Politicians' Tone Became More Negative with 2016 Primary Campaigns
Külz, Jonathan, Spitz, Andreas, Abu-Akel, Ahmad, Günnemann, Stephan, West, Robert
There is a widespread belief that the tone of US political language has become more negative recently, in particular when Donald Trump entered politics. At the same time, there is disagreement as to whether Trump changed or merely continued previous trends. To date, data-driven evidence regarding these questions is scarce, partly due to the difficulty of obtaining a comprehensive, longitudinal record of politicians' utterances. Here we apply psycholinguistic tools to a novel, comprehensive corpus of 24 million quotes from online news attributed to 18,627 US politicians in order to analyze how the tone of US politicians' language evolved between 2008 and 2020. We show that, whereas the frequency of negative emotion words had decreased continuously during Obama's tenure, it suddenly and lastingly increased with the 2016 primary campaigns, by 1.6 pre-campaign standard deviations, or 8% of the pre-campaign mean, in a pattern that emerges across parties. The effect size drops by 40% when omitting Trump's quotes, and by 50% when averaging over speakers rather than quotes, implying that prominent speakers, and Trump in particular, have disproportionately, though not exclusively, contributed to the rise in negative language. This work provides the first large-scale data-driven evidence of a drastic shift toward a more negative political tone following Trump's campaign start as a catalyst, with important implications for the debate about the state of US politics.
- Europe > Switzerland > Vaud > Lausanne (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- (3 more...)
- Media > News (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
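The paper's headline statistic - a shift of 1.6 pre-campaign standard deviations, or 8% of the pre-campaign mean - combines two normalizations of the same raw difference. The sketch below is illustrative only: the lexicon is a stand-in and the scoring is far simpler than the authors' psycholinguistic pipeline.

```python
import statistics

# Stand-in negative-emotion lexicon (an assumption; the paper uses
# established psycholinguistic word lists).
NEGATIVE_WORDS = {"angry", "terrible", "hate", "disaster", "sad"}

def neg_fraction(quote: str) -> float:
    """Fraction of a quote's tokens that are negative-emotion words."""
    tokens = quote.lower().split()
    return sum(t.strip(".,!?") in NEGATIVE_WORDS for t in tokens) / max(len(tokens), 1)

def shift(pre_scores, post_scores):
    """Express the post-campaign change both in pre-campaign standard
    deviations and as a percentage of the pre-campaign mean."""
    pre_mean = statistics.mean(pre_scores)
    pre_sd = statistics.stdev(pre_scores)
    post_mean = statistics.mean(post_scores)
    return ((post_mean - pre_mean) / pre_sd,          # in pre-campaign SDs
            100 * (post_mean - pre_mean) / pre_mean)  # as % of pre-campaign mean
```

Reporting both normalizations is what lets the paper say "1.6 standard deviations" and "8% of the mean" about the same underlying increase.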
Ruddit: Norms of Offensiveness for English Reddit Comments
Hada, Rishav, Sudhir, Sohi, Mishra, Pushkar, Yannakoudakis, Helen, Mohammad, Saif M., Shutova, Ekaterina
On social media platforms, hateful and offensive language negatively impact the mental well-being of users and the participation of people from diverse backgrounds. Automatic methods to detect offensive language have largely relied on datasets with categorical labels. However, comments can vary in their degree of offensiveness. We create the first dataset of English language Reddit comments that has fine-grained, real-valued scores between -1 (maximally supportive) and 1 (maximally offensive). The dataset was annotated using Best–Worst Scaling, a form of comparative annotation that has been shown to alleviate known biases of using rating scales. We show that the method produces highly reliable offensiveness scores. Finally, we evaluate the ability of widely-used neural models to predict offensiveness scores on this new dataset.
- Europe (1.00)
- Asia (1.00)
- North America > United States > Minnesota (0.29)
- Law (1.00)
- Health & Medicine > Therapeutic Area (0.68)
- Information Technology > Security & Privacy (0.46)
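The real-valued scores in [-1, 1] that the Ruddit abstract describes come from the standard Best-Worst Scaling counting procedure: each item's score is (times chosen as most offensive minus times chosen as least offensive) divided by its number of appearances. The annotation-tuple format below is an assumption for illustration, not the dataset's actual schema.

```python
from collections import defaultdict

def bws_scores(annotations):
    """Best-Worst Scaling counting procedure.
    annotations: list of (tuple_of_items, best_item, worst_item), where
    'best' means chosen as most offensive and 'worst' as least offensive.
    Returns per-item scores in [-1, 1]."""
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, b, w in annotations:
        for it in items:
            seen[it] += 1
        best[b] += 1
        worst[w] += 1
    return {it: (best[it] - worst[it]) / seen[it] for it in seen}
```

An item picked as most offensive every time it appears scores 1.0, one always picked as least offensive scores -1.0, matching the dataset's score range.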
The Science of Swear Words (Warning: NSFW AF)
Editor's note: The following excerpt from a book about swear words contains many, many swear words. Some of them are pretty ugly, but it's all in the name of linguistics. Many words describing sexual organs, excretory functions, and so on fail to rise to the heights (or, if you prefer, sink to the depths) of profanity. These words are articulated without fear of offending, whether in the classroom or the courtroom or the examination room. They aren't profane, despite referring to taboo concepts.
- Leisure & Entertainment (1.00)
- Media > Film (0.70)